FAST TRACK COMMUNICATION

A closed-form expression for the Sharma–Mittal entropy of exponential families
Authors
Abstract
The Sharma–Mittal entropies generalize the celebrated Shannon, Rényi and Tsallis entropies. We report a closed-form formula for the Sharma–Mittal entropies and relative entropies of arbitrary exponential family distributions. We explicitly instantiate the formula for the case of the multivariate Gaussian distributions and discuss its estimation.

PACS numbers: 03.65.Ta, 03.67.-a

The Sharma–Mittal entropy $H_{\alpha,\beta}(p)$ [1, 2] of a probability density$^4$ $p$ is defined as
$$
H_{\alpha,\beta}(p) = \frac{1}{1-\beta}\left(\left(\int p^{\alpha}(x)\,\mathrm{d}x\right)^{\frac{1-\beta}{1-\alpha}} - 1\right), \qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1. \quad (1)
$$
This bi-parametric family of entropies tends in limit cases to the Rényi entropies $R_{\alpha}(p) = \frac{1}{1-\alpha}\log\int p^{\alpha}(x)\,\mathrm{d}x$ (for $\beta \to 1$), the Tsallis entropies $T_{\alpha}(p) = \frac{1}{1-\alpha}\left(\int p^{\alpha}(x)\,\mathrm{d}x - 1\right)$ (for $\beta \to \alpha$) and the Shannon entropy $H(p) = -\int p(x)\log p(x)\,\mathrm{d}x$ (for both $\alpha, \beta \to 1$). The Sharma–Mittal entropy has previously been studied in the context of multi-dimensional harmonic oscillator systems [3].

$^4$ For the sake of simplicity and without loss of generality, we consider the probability density function $p$ of a continuous random variable $X \sim p$ in this note. For multivariate densities $p$, the integral notation $\int$ denotes the corresponding multi-dimensional integral, so that we write for short $\int p(x)\,\mathrm{d}x = 1$. Our results hold for probability mass functions and probability measures in general.

Many usual statistical distributions, including the Gaussians and the discrete multinomials (that is, normalized histograms), belong to the exponential families [4]. Those exponential families play a major role in the field of thermo-statistics [5] and admit the generic canonical decomposition
$$
p_F(x|\theta) = \exp\left(\langle\theta, t(x)\rangle - F(\theta) + k(x)\right), \quad (2)
$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product, $F$ is a strictly convex $C^{\infty}$ function characterizing the family (called the log-normalizer since $F(\theta) = \log\int \mathrm{e}^{\langle\theta,t(x)\rangle + k(x)}\,\mathrm{d}x$), $\theta \in \Theta$ is the natural parameter indexing the member of the family $E_F = \{p_F(x|\theta)\ |\ \theta \in \Theta\}$, $t(x)$ is the sufficient statistic and $k(x)$ is an auxiliary carrier measure [4]. The natural parameter space $\Theta = \{\theta\ |\ F(\theta) < \infty\}$ is an open convex set.

For example, the probability density of a multivariate Gaussian $p \sim N(\mu, \Sigma)$ centered at $\mu$ with a positive-definite covariance matrix $\Sigma$ is conventionally written as
$$
p(x|\mu, \Sigma) = \frac{1}{(2\pi)^{\frac{d}{2}}\sqrt{|\Sigma|}}\exp\left(-\frac{(x-\mu)^{\top}\Sigma^{-1}(x-\mu)}{2}\right), \quad (3)
$$
where $|\Sigma| > 0$ denotes the determinant of the positive-definite matrix $\Sigma$. Rewriting the density of equation (3) to fit the canonical decomposition of equation (2), we obtain
$$
p(x|\mu, \Sigma) = \exp\left(-\tfrac{1}{2}x^{\top}\Sigma^{-1}x + x^{\top}\Sigma^{-1}\mu - \tfrac{1}{2}\mu^{\top}\Sigma^{-1}\mu - \tfrac{1}{2}\log\left((2\pi)^{d}|\Sigma|\right)\right). \quad (4)
$$
Using the cyclic property of the matrix trace, we have $-\tfrac{1}{2}x^{\top}\Sigma^{-1}x = \mathrm{tr}\left(-\tfrac{1}{2}x^{\top}\Sigma^{-1}x\right) = \mathrm{tr}\left(xx^{\top}\left(-\tfrac{1}{2}\Sigma^{-1}\right)\right)$, where $\mathrm{tr}$ denotes the matrix trace operator. It follows that
$$
p(x|\mu, \Sigma) = \exp\left(\left\langle\left(x, xx^{\top}\right), \left(\Sigma^{-1}\mu, -\tfrac{1}{2}\Sigma^{-1}\right)\right\rangle - F(\theta)\right) \quad (5)
$$
$$
= p(x|\theta), \quad (6)
$$
with $\theta = \left(\Sigma^{-1}\mu, -\tfrac{1}{2}\Sigma^{-1}\right)$ and $F(\theta) = \tfrac{1}{2}\log\left((2\pi)^{d}|\Sigma|\right) + \tfrac{1}{2}\mu^{\top}\Sigma^{-1}\mu$ (and $k(x) = 0$). In this decomposition, the natural parameter $\theta = \left(\Sigma^{-1}\mu, -\tfrac{1}{2}\Sigma^{-1}\right) = (v, M)$ consists of two parts: a vector part $v$ and a symmetric negative-definite matrix part $M \prec 0$. The inner product of $\theta = (v, M)$ and $\theta' = (v', M')$ is defined as $\langle\theta, \theta'\rangle = v^{\top}v' + \mathrm{tr}(M^{\top}M')$. For univariate normal distributions, the natural parameter is $\theta = \left(\frac{\mu}{\sigma^{2}}, -\frac{1}{2\sigma^{2}}\right)$. The order of the exponential family is the dimension of its natural parameter space $\Theta$.
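As a quick numerical illustration of the canonical decomposition above, the following Python sketch (our own addition, not from the paper; the helper names `natural_params`, `log_normalizer` and `canonical_pdf` are hypothetical) maps $(\mu, \Sigma)$ to the natural parameters $\theta = (v, M)$, evaluates $\exp(\langle\theta, t(x)\rangle - F(\theta))$ using the log-normalizer of equation (8), and checks the result against the standard Gaussian density of equation (3).

```python
# A minimal sanity check (ours, not from the paper): the canonical
# exponential-family form of equations (5)-(8) should reproduce the
# usual multivariate Gaussian density of equation (3).
import numpy as np
from scipy.stats import multivariate_normal

def natural_params(mu, Sigma):
    """Map (mu, Sigma) to the natural parameters theta = (v, M)."""
    P = np.linalg.inv(Sigma)         # precision matrix Sigma^{-1}
    return P @ mu, -0.5 * P          # v = Sigma^{-1} mu,  M = -1/2 Sigma^{-1}

def log_normalizer(v, M):
    """F(v, M) = d/2 log(2 pi) - 1/2 log|-2M| - 1/4 v^T M^{-1} v  (equation (8))."""
    d = v.shape[0]
    return ((d / 2) * np.log(2 * np.pi)
            - 0.5 * np.linalg.slogdet(-2 * M)[1]
            - 0.25 * v @ np.linalg.solve(M, v))

def canonical_pdf(x, v, M):
    """p(x|theta) = exp(<(x, x x^T), (v, M)> - F(theta)) with k(x) = 0."""
    inner = v @ x + np.trace(np.outer(x, x) @ M)
    return np.exp(inner - log_normalizer(v, M))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu = np.array([1.0, -2.0])
    A = rng.standard_normal((2, 2))
    Sigma = A @ A.T + 2 * np.eye(2)              # a positive-definite covariance
    v, M = natural_params(mu, Sigma)
    x = np.array([0.5, 0.3])
    print(canonical_pdf(x, v, M))                # canonical form (5)
    print(multivariate_normal(mu, Sigma).pdf(x)) # standard form (3); should agree
```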
Normal $d$-dimensional distributions $N(\mu, \Sigma)$ form an exponential family of order $d + \frac{d(d+1)}{2} = \frac{d(d+3)}{2}$. We have $M = -\tfrac{1}{2}\Sigma^{-1}$, that is, $|\Sigma^{-1}| = |\Sigma|^{-1} = |-2M|$, and $\mu = -\tfrac{1}{2}M^{-1}v$ (since $M^{-1} = -2\Sigma$, $-\tfrac{1}{2}M^{-1}v = \Sigma v = \mu$ and $M^{-\top} = M^{-1}$). It follows that the log-normalizer $F$, expressed using the source parameters $(\mu, \Sigma)$ and the natural parameters $(v, M)$, is
$$
F(\mu, \Sigma) = \tfrac{1}{2}\log\left((2\pi)^{d}|\Sigma|\right) + \tfrac{1}{2}\mu^{\top}\Sigma^{-1}\mu, \quad (7)
$$
$$
F(v, M) = \tfrac{d}{2}\log 2\pi - \tfrac{1}{2}\log|-2M| - \tfrac{1}{4}v^{\top}M^{-1}v. \quad (8)
$$
In order to calculate the Sharma–Mittal entropy of equation (1), let $M_{\alpha}(p) = \int p^{\alpha}(x)\,\mathrm{d}x$ so that
$$
H_{\alpha,\beta}(p) = \frac{1}{1-\beta}\left(M_{\alpha}(p)^{\frac{1-\beta}{1-\alpha}} - 1\right). \quad (9)
$$
Let us prove that for an arbitrary exponential family $E_F = \{p_F(x|\theta)\ |\ \theta \in \Theta\}$,
$$
M_{\alpha}(p) = \mathrm{e}^{F(\alpha\theta) - \alpha F(\theta)}\, E_{x \sim p_F(x|\alpha\theta)}\!\left[\mathrm{e}^{(\alpha-1)k(x)}\right]. \quad (10)
$$
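The excerpt ends before the proof of equation (10); the following derivation is our own sketch of the standard argument, written in the paper's notation and assuming that $\alpha\theta$ lies in the natural parameter space $\Theta$. It uses only the canonical decomposition of equation (2):
$$
\begin{aligned}
M_{\alpha}(p) &= \int \exp\big(\alpha\langle\theta, t(x)\rangle - \alpha F(\theta) + \alpha k(x)\big)\,\mathrm{d}x\\
&= \mathrm{e}^{F(\alpha\theta) - \alpha F(\theta)} \int \exp\big(\langle\alpha\theta, t(x)\rangle - F(\alpha\theta) + k(x)\big)\,\mathrm{e}^{(\alpha-1)k(x)}\,\mathrm{d}x\\
&= \mathrm{e}^{F(\alpha\theta) - \alpha F(\theta)}\, E_{x \sim p_F(x|\alpha\theta)}\!\left[\mathrm{e}^{(\alpha-1)k(x)}\right].
\end{aligned}
$$
In particular, for families with $k(x) = 0$, such as the multivariate Gaussians above, the expectation equals one, so $M_{\alpha}(p) = \mathrm{e}^{F(\alpha\theta) - \alpha F(\theta)}$, which substituted into equation (9) gives the Sharma–Mittal entropy in closed form. The short Python sketch below (again ours; the function names are hypothetical) checks this closed form against direct numerical integration for a univariate Gaussian, using the $d = 1$ log-normalizer of equation (8).

```python
# Our numerical sanity check (not from the paper): for a univariate Gaussian,
# compare M_alpha(p) = exp(F(alpha*theta) - alpha*F(theta)) and the resulting
# Sharma-Mittal entropy of equation (9) against direct quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def F_univariate(v, m):
    """Log-normalizer (8) for d = 1, with theta = (v, m) = (mu/sigma^2, -1/(2 sigma^2))."""
    return 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-2 * m) - 0.25 * v**2 / m

def sharma_mittal_closed_form(mu, sigma, alpha, beta):
    v, m = mu / sigma**2, -1.0 / (2 * sigma**2)
    M_alpha = np.exp(F_univariate(alpha * v, alpha * m) - alpha * F_univariate(v, m))
    return (M_alpha ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

def sharma_mittal_numerical(mu, sigma, alpha, beta):
    M_alpha, _ = quad(lambda x: norm.pdf(x, mu, sigma) ** alpha, -np.inf, np.inf)
    return (M_alpha ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

print(sharma_mittal_closed_form(1.0, 2.0, alpha=1.5, beta=0.7))
print(sharma_mittal_numerical(1.0, 2.0, alpha=1.5, beta=0.7))  # should agree
```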